S2-Net: Self-Supervision Guided Feature Representation Learning for Cross-Modality Images

Authors

Abstract

Dear Editor, This letter focuses on combining the respective advantages of cross-modality images, which can compensate for the lack of information in a single modality. Meanwhile, due to the great appearance differences between image pairs, it often fails to make the feature representations of correspondences as close as possible. In this letter, we design a feature representation learning network, S2-Net, which is based on the recently successful detect-and-describe pipeline, originally proposed for visible images but adapted to work with cross-modality pairs. Extensive experiments show that our elegant formulation of combined supervised and self-supervised optimization outperforms the state-of-the-arts on three cross-modality datasets.
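As a rough illustration of the abstract's core idea, combining a supervised loss on cross-modality correspondences with a self-supervised objective on top of a dense detect-and-describe descriptor network, the sketch below trains a toy descriptor backbone with both terms. The network, the particular losses, and the 0.5 weighting are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch (not the authors' code): a dense descriptor network optimized
# with (a) a supervised loss pulling descriptors of ground-truth cross-modality
# correspondences together and (b) a self-supervised consistency loss between
# an image and a photometrically augmented copy of itself.
import torch
import torch.nn as nn
import torch.nn.functional as F

class DescriptorNet(nn.Module):
    """Tiny dense-descriptor backbone standing in for a detect-and-describe network."""
    def __init__(self, dim=128):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
            nn.Conv2d(64, dim, 3, padding=1),
        )
    def forward(self, x):
        # L2-normalized dense descriptors, one per pixel
        return F.normalize(self.features(x), dim=1)

def sample_descriptors(desc_map, kpts):
    # desc_map: (B, C, H, W); kpts: (B, N, 2) integer (x, y) locations -> (B, N, C)
    b_idx = torch.arange(desc_map.size(0)).unsqueeze(1)
    return desc_map[b_idx, :, kpts[..., 1], kpts[..., 0]]

def supervised_loss(d_a, d_b):
    # Cosine distance between descriptors of known correspondences.
    return (1 - (d_a * d_b).sum(dim=-1)).mean()

def self_supervised_loss(desc_map, desc_map_aug):
    # Dense consistency between an image and its augmented view.
    return (1 - (desc_map * desc_map_aug).sum(dim=1)).mean()

net = DescriptorNet()
opt = torch.optim.Adam(net.parameters(), lr=1e-4)

# Toy batch: a visible/infrared-style pair, an augmented view, and N matched keypoints.
vis, ir = torch.rand(2, 3, 64, 64), torch.rand(2, 3, 64, 64)
vis_aug = (vis + 0.05 * torch.randn_like(vis)).clamp(0, 1)
kpts = torch.randint(0, 64, (2, 32, 2))

d_vis, d_ir, d_aug = net(vis), net(ir), net(vis_aug)
loss = supervised_loss(sample_descriptors(d_vis, kpts),
                       sample_descriptors(d_ir, kpts)) \
       + 0.5 * self_supervised_loss(d_vis, d_aug)   # weighting is an assumption
opt.zero_grad(); loss.backward(); opt.step()
```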


Similar articles

Representation Learning for Cross-Modality Classification

Differences in scanning parameters or modalities can complicate image analysis based on supervised classification. This paper presents two representation learning approaches, based on autoencoders, that address this problem by learning representations that are similar across domains. Both approaches use, next to the data representation objective, a similarity objective to minimise the differenc...
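As a rough illustration of the approach described in this abstract, the sketch below trains one autoencoder per domain with a reconstruction objective and adds a similarity objective that pulls the latent codes of paired samples together. The per-domain architecture, the pairing, and the 0.1 weight are my own illustrative assumptions, not the paper's exact setup.

```python
# Hedged sketch: autoencoders with a reconstruction (representation) objective
# plus a similarity objective on the latent codes of paired cross-domain samples.
import torch
import torch.nn as nn
import torch.nn.functional as F

class AE(nn.Module):
    def __init__(self, in_dim=256, latent=32):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(in_dim, 128), nn.ReLU(), nn.Linear(128, latent))
        self.dec = nn.Sequential(nn.Linear(latent, 128), nn.ReLU(), nn.Linear(128, in_dim))
    def forward(self, x):
        z = self.enc(x)
        return self.dec(z), z

ae_a, ae_b = AE(), AE()          # one autoencoder per domain (an assumption)
opt = torch.optim.Adam(list(ae_a.parameters()) + list(ae_b.parameters()), lr=1e-3)

# Toy paired batch standing in for the same subjects imaged in two domains.
x_a, x_b = torch.rand(16, 256), torch.rand(16, 256)
rec_a, z_a = ae_a(x_a)
rec_b, z_b = ae_b(x_b)

loss = F.mse_loss(rec_a, x_a) + F.mse_loss(rec_b, x_b)   # representation objective
loss = loss + 0.1 * F.mse_loss(z_a, z_b)                 # similarity objective (weight assumed)
opt.zero_grad(); loss.backward(); opt.step()
```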

Deblocking Joint Photographic Experts Group Compressed Images via Self-learning Sparse Representation

JPEG is one of the most widely used image compression methods, but it causes annoying blocking artifacts at low bit-rates. Sparse representation is an efficient technique that can solve many inverse problems in image processing applications such as denoising and deblocking. In this paper, a post-processing method is proposed for reducing JPEG blocking effects via sparse representation. In this ...
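A minimal sketch of the general sparse-representation recipe (not the paper's specific method): learn a dictionary over image patches, sparse-code the patches of the compressed image, and rebuild the image from the sparse approximations. The patch size, dictionary size, and the scikit-learn routines are illustrative choices.

```python
# Hedged sketch: patch-based sparse-representation post-processing.
import numpy as np
from sklearn.decomposition import MiniBatchDictionaryLearning
from sklearn.feature_extraction.image import extract_patches_2d, reconstruct_from_patches_2d

rng = np.random.default_rng(0)
blocky = rng.random((64, 64))                  # stand-in for a JPEG-compressed image

patches = extract_patches_2d(blocky, (8, 8))   # overlapping 8x8 patches
X = patches.reshape(len(patches), -1)
mean = X.mean(axis=1, keepdims=True)
X = X - mean                                   # code the AC component only

dico = MiniBatchDictionaryLearning(n_components=64, alpha=1.0, random_state=0)
codes = dico.fit_transform(X)                  # sparse codes per patch
approx = codes @ dico.components_ + mean       # sparse reconstruction of each patch

deblocked = reconstruct_from_patches_2d(approx.reshape(patches.shape), blocky.shape)
```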

Self-Supervision for Reinforcement Learning

Reinforcement learning optimizes policies for expected cumulative reward. Need the supervision be so narrow? Reward is delayed and sparse for many tasks, making it a difficult and impoverished signal for end-to-end optimization. To augment reward, we consider a range of self-supervised tasks that incorporate states, actions, and successors to provide auxiliary losses. These losses offer ubiquit...
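To make the idea of auxiliary self-supervised losses concrete, the sketch below adds a forward-dynamics head, predicting the successor state from the current state and action, next to a REINFORCE-style policy term. The toy transitions, network shapes, and 0.1 weight are assumptions for illustration, not the paper's setup.

```python
# Hedged sketch: augmenting an RL objective with a self-supervised auxiliary loss.
import torch
import torch.nn as nn
import torch.nn.functional as F

STATE_DIM, N_ACTIONS = 8, 4

class Agent(nn.Module):
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(STATE_DIM, 64), nn.ReLU())
        self.policy = nn.Linear(64, N_ACTIONS)
        self.dynamics = nn.Linear(64 + N_ACTIONS, STATE_DIM)   # auxiliary head
    def forward(self, s, a_onehot):
        h = self.encoder(s)
        return self.policy(h), self.dynamics(torch.cat([h, a_onehot], dim=-1))

agent = Agent()
opt = torch.optim.Adam(agent.parameters(), lr=1e-3)

# Toy batch of transitions (state, action, return, next state) standing in for rollouts.
s = torch.randn(32, STATE_DIM)
a = torch.randint(0, N_ACTIONS, (32,))
ret = torch.randn(32)                    # (normalized) returns
s_next = torch.randn(32, STATE_DIM)

logits, pred_next = agent(s, F.one_hot(a, N_ACTIONS).float())
logp = torch.log_softmax(logits, dim=-1).gather(1, a.unsqueeze(1)).squeeze(1)
policy_loss = -(logp * ret).mean()                 # REINFORCE-style reward term
aux_loss = F.mse_loss(pred_next, s_next)           # self-supervised auxiliary signal
loss = policy_loss + 0.1 * aux_loss                # weight is an assumption
opt.zero_grad(); loss.backward(); opt.step()
```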

Heterogeneous Supervision for Relation Extraction: A Representation Learning Approach

Relation extraction is a fundamental task in information extraction. Most existing methods rely heavily on annotations labeled by human experts, which are costly and time-consuming. To overcome this drawback, we propose a novel framework, REHESSION, to conduct relation extractor learning using annotations from heterogeneous information sources, e.g., knowledge bases and domain heuristics. ...

Feature Representation for Cross-Lingual, Cross-Media Semantic Web Applications

Currently, ontology development has been mostly directed at the representation of domain knowledge (i.e., classes, relations and instances) and much less at the representation of corresponding text and image features. To allow for cross-media knowledge markup, a richer representation of features is needed. At present, such information is mostly missing or represented only in a very impoverished...

Journal

Journal title: IEEE/CAA Journal of Automatica Sinica

Year: 2022

ISSN: 2329-9274, 2329-9266

DOI: https://doi.org/10.1109/jas.2022.105884